‘Azerbaijan is losing in the AI race’ | INTERVIEW   

04 October 2024

‘Unfortunately, I see Azerbaijan only as a consumer of Artificial Intelligence. Our policy in the field and the steps taken by certain institutions give the impression of consumer behaviour. What is crucial for moving forward is supporting ideas,’ says Elvin Abbasov, high-tech and AI expert, in an interview with AzVision.az. He adds that young people’s ideas on AI and other high-tech fields find no support in Azerbaijan: various funds prefer investing in established businesses, and young entrepreneurship is little valued. 

‘When our institutions announce projects to develop AI, a closer look reveals that their approach is simply to connect to OpenAI’s ChatGPT, send data and receive an analysed, processed response. This means we do not have our own AI model. We are still acting as consumers, merely creating the appearance of one.’ 

Abbasov says building an AI must begin with producing useful content. People must, at some point, realise that a country cannot be developed by using social media purely for entertainment. Everybody should produce useful content in their own field of expertise: 

‘Why can’t we build an AI in Azerbaijan? Because we don’t have valuable content. Anyone can build one today: neural network models are freely available as open source on the internet. But an AI needs databases to learn from, and since there are none in Azerbaijani, AI models cannot advance. 

OpenAI drew on the English-language Wikipedia while developing ChatGPT. Building an AI requires useful content, which we call big data. Let me simply compare Azerbaijan and Armenia on Wikipedia. Armenians have produced around 300,000 articles with 150,000–200,000 users, while Azerbaijan has 200,000 pieces of content and 300,000 users, and even those 200,000 pieces are a recent achievement. Only about 30 of those 300,000 users are active producers of valuable content. 

We, as a nation, must strive to create valuable information. This is not the responsibility of government agencies alone; we, as members of society, must make an individual effort to serve our national interests. There is an urgent need for content in Azerbaijani to build an AI, and nowhere in the world is this expected of governments. Some issues are a social responsibility, and society must rise to the occasion. People must trade criticism for innovation, quit complaining and take up producing. We must start creating something of our own; complaining just means marking time. It’s time to roll up our sleeves and get down to business, so that the legacy we leave our future generations consists of more than jokes and gossip.’
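The dependence on a text corpus that Abbasov describes can be illustrated with a toy example. The sketch below is a minimal bigram counter in plain Python, with a hypothetical three-sentence corpus standing in for ‘useful content’; it is not how ChatGPT works (modern systems are large neural networks), but the data dependence is the same: a word the model has never seen in its training text yields no prediction at all, which is exactly the gap described for Azerbaijani-language content.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count word-to-next-word transitions in a list of sentences."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None  # data gap: no examples means no prediction
    return followers.most_common(1)[0][0]

# Hypothetical tiny corpus standing in for "useful content".
corpus = [
    "artificial intelligence needs data",
    "artificial intelligence needs content",
    "useful content trains artificial intelligence",
]
model = train_bigram(corpus)

predict_next(model, "artificial")  # -> "intelligence" (seen 3 times)
predict_next(model, "azerbaijan")  # -> None (never appears in the corpus)
```

Scaling the corpus up is what improves the model; with no corpus at all, no amount of open-source model code helps.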

- Can AI be used to produce disinformation and manipulate public socio-political thinking? 

‘Not directly. Say we want to spread disinformation but mask it as an academic source. We feed an AI the necessary prompts and produce an article on the topic, add five true facts and a bit of fiction, and start spreading it. If readers can verify four or five pieces of information, they will not check the sixth for credibility. We can then publish the concoction on Wikipedia, citing the previously spread disinformation as a source, thereby even elevating it to an academic level. 

Disinformation is believed to be the main instrument of today’s major hybrid wars. We keep witnessing new attempts to confuse society, divert its attention and portray non-existent events as real. AI is a powerful player on this plane, which means it can be made to produce any scenario. Be that as it may, AI still cannot produce a 150-page scientific work; it is not a creative force. 

Meanwhile, AI can accurately analyse and process many things that we may have overlooked. It is a tool that analyses millions of sources simultaneously, finds patterns and puts together a new product. It is devoid of any emotions, prejudice or feelings. It does not carry human sentiments; it is simply a force of analysis and processing. It is a helpful assistant for the humankind.’ 

- Elon Musk and Steve Wozniak have both made alarming statements on the need for strict control in the field. Are those alarms well-founded? Or are we wrong to fear that chaotic, uncontrolled use of AI could pose a serious threat? 

‘There is no clear ‘yes’ or ‘no’ to this question. It all depends on whether humans develop a safe or a dangerous AI. AI is an instrument that needs constant maintenance and editing, electricity and power. How would it feed and manage itself without human involvement? If AI set out to destroy our kind, it would end up destroying itself. And biased behaviour by an AI would require it to have some kind of feelings. 

Elon Musk took part in OpenAI when it was first launched but later withdrew, then made several small attempts to develop his own AI. Musk is someone who wants 100% of the success for himself and does not like to share it. I therefore believe he may simply have wanted to slow his competitors’ advance with this statement. 

I believe Bill Gates has the most rational approach here: ‘The biggest company of the future will be the one that builds the best AI assistant.’ He sees AI simply as an assistant, not as capital or as a human.’

- Can states join efforts to overcome threats that may arise from AI misuse? Or is every man for himself? 

‘There are international standards and institutions for any given product. The International Organization for Standardization, for instance, cooperates with all countries, including our AzStandart. A digital or physical product must be certified and referenced before it reaches the consumer and the international market; otherwise, it will not find a place in that market. There is a standard against which everything is measured. 

Open fields see less fraud and abuse because of tighter oversight, and AI is likewise open source and under strict scrutiny. Turning AI into a weapon is no easy feat: if someone attempts it, ten others will expose the wrongdoing and demand correction. 

The dangerous content you mention is produced at the individual level, by fraud rings and cyber criminals. Statements that deter the wider public from AI are simply wrong, because it is a friendly tool, extremely useful for working more efficiently. Those who cannot employ AI in their work will be at a disadvantage in the future, so I strongly believe everyone must learn to use it. Social networks play the central part in our lives today, but that role will soon pass to AI. We mustn’t fear it: AI is not an enemy, but a friend.’

Sahil Isgandarov 

